Online linear optimization with the log-determinant regularizer

Authors

  • Ken-ichiro Moridomi
  • Kohei Hatano
  • Eiji Takimoto
Abstract

We consider online linear optimization over symmetric positive semidefinite matrices, which has various applications including online collaborative filtering. The problem is formulated as a repeated game between the algorithm and the adversary, where in each round t the algorithm and the adversary choose matrices Xt and Lt, respectively, and then the algorithm suffers a loss given by the Frobenius inner product of Xt and Lt. The goal of the algorithm is to minimize the cumulative loss. We can employ a standard framework called Follow the Regularized Leader (FTRL) for designing algorithms, where we need to choose an appropriate regularization function to obtain a good performance guarantee. We show that the log-determinant regularization works better than other popular regularization functions in the case where the loss matrices Lt are all sparse. Using this property, we show that our algorithm achieves an optimal performance guarantee for online collaborative filtering. The technical contribution of the paper is to develop a new technique for deriving performance bounds by exploiting the strong convexity of the log-determinant with respect to the loss matrices, whereas in previous analyses strong convexity is defined with respect to a norm. Intuitively, skipping the norm analysis results in the improved bound. Moreover, we apply our method to online linear optimization over vectors and show that FTRL with the Burg entropy regularizer, the analogue of the log-determinant regularizer in the vector case, works well.

Key words: Online matrix prediction, log-determinant, online collaborative filtering
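The FTRL scheme described in the abstract can be sketched concretely. With the log-determinant regularizer R(X) = -log det(X), each round's prediction minimizes the cumulative linear loss plus (1/η)·R(X), which over positive definite matrices has the closed form X_t = (η Σ_{s<t} L_s)^{-1}. The sketch below is a minimal, unconstrained illustration of that update (the paper works over a bounded set of PSD matrices; the projection step, the learning-rate choice, and the `eps` ridge that keeps early rounds invertible are simplifying assumptions here, and `ftrl_logdet` is a hypothetical name, not the authors' code):

```python
import numpy as np

def ftrl_logdet(loss_matrices, eta=0.5, eps=1.0):
    """Sketch of FTRL with the log-determinant regularizer R(X) = -log det(X).

    Each round plays X_t = argmin_X <sum_{s<t} L_s, X> + (1/eta) R(X)
    over positive definite X, i.e. X_t = (eta * cum_L)^{-1} in closed form.
    eps * I keeps the cumulative loss invertible before any loss is seen.
    Returns the cumulative loss suffered by the algorithm.
    """
    n = loss_matrices[0].shape[0]
    cum_L = eps * np.eye(n)          # cumulative loss matrix (ridge for round 1)
    total_loss = 0.0
    for L in loss_matrices:
        X = np.linalg.inv(eta * cum_L)   # FTRL prediction for this round
        total_loss += np.trace(X @ L)    # Frobenius inner product <X, L>
        cum_L += L                       # adversary reveals its loss matrix
    return total_loss
```

Note that for symmetric PSD loss matrices L_t (the sparse setting the paper analyzes includes such losses), each per-round loss trace(X L) is nonnegative because X is positive definite.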


Similar articles

Online matrix prediction for sparse loss matrices

We consider an online matrix prediction problem. FTRL is a standard method for online prediction tasks, which makes predictions by minimizing the cumulative loss function plus a regularizer function. There are three popular regularizer functions for matrices: the Frobenius norm, negative entropy, and log-determinant. We propose an FTRL based algorithm with log-determinant as the regularize...


Image Restoration by Variable Splitting based on Total Variant Regularizer

The aim of image restoration is to obtain a higher quality desired image from a degraded image. In this strategy, an image inpainting method fills the degraded or lost area of the image with appropriate information. This is performed in such a way that the obtained image is indistinguishable to a casual person who is unfamiliar with the original image. In this paper, different images are degr...


Image Restoration with Compound Regularization Using a Bregman Iterative Algorithm

Some imaging inverse problems may require the solution to simultaneously exhibit properties that are not enforceable by a single regularizer. One way to attain this goal is to use a linear combination of regularizers, thus encouraging the solution to simultaneously exhibit the characteristics enforced by each individual regularizer. In this paper, we address the optimization problem resulting ...


Inflation Behavior in Top Sukuk Issuing Countries: Using a Bayesian Log-linear Model

This paper focused on developing a model to study the effect of sukuk issuance on the inflation rate in top sukuk issuing Islamic economies in 2014. For this purpose, as the available sample size is small, a Bayesian approach to the regression model is used, which contains key supply and demand side factors in addition to the outstanding sukuk volume as potential determinants of the inflation rate...


Fenchel Duals for Drifting Adversaries

We describe a primal-dual framework for the design and analysis of online convex optimization algorithms for drifting regret. Existing literature shows (nearly) optimal drifting regret bounds only for the l2 and the l1-norms. Our work provides a connection between these algorithms and the Online Mirror Descent (OMD) updates; one key insight that results from our work is that in order for these ...



Journal: CoRR

Volume: abs/1710.10002

Publication year: 2017